Decrease block cache size, increase execution state cache size #4544
Conversation
    value_cache::ValueCache,
};

const BLOCK_CACHE_SIZE: usize = 5_000;
If it really OOMed with 10_000 I'd do more than just halve it. I'd go down to 1_000 or even 100 maybe?
In principle I agree, but the more we lower this, the more we seem to be penalized in performance... So 5k seems like a good compromise.
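The trade-off being discussed (a smaller capacity bounds memory but increases cache misses) hinges on the cache evicting its least recently used entries once full. As a sketch of that mechanism, here is a minimal LRU-style bounded cache; `BoundedCache` is a hypothetical illustration, not the actual `ValueCache` implementation:

```rust
use std::collections::{HashMap, VecDeque};

/// Minimal LRU-style bounded cache (illustrative only; not the real ValueCache).
struct BoundedCache<K: std::hash::Hash + Eq + Clone, V> {
    capacity: usize,
    map: HashMap<K, V>,
    order: VecDeque<K>, // front = least recently used
}

impl<K: std::hash::Hash + Eq + Clone, V> BoundedCache<K, V> {
    fn new(capacity: usize) -> Self {
        Self { capacity, map: HashMap::new(), order: VecDeque::new() }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        if self.map.contains_key(key) {
            // Mark the key as most recently used.
            self.order.retain(|k| k != key);
            self.order.push_back(key.clone());
        }
        self.map.get(key)
    }

    fn put(&mut self, key: K, value: V) {
        if self.map.contains_key(&key) {
            self.order.retain(|k| k != &key);
        } else if self.map.len() >= self.capacity {
            // Evict the least recently used entry so memory stays bounded.
            if let Some(lru) = self.order.pop_front() {
                self.map.remove(&lru);
            }
        }
        self.order.push_back(key.clone());
        self.map.insert(key, value);
    }
}

fn main() {
    let mut cache = BoundedCache::new(2);
    cache.put("a", 1);
    cache.put("b", 2);
    cache.get(&"a"); // "a" becomes most recently used
    cache.put("c", 3); // capacity reached: evicts "b", the LRU entry
    assert!(cache.get(&"b").is_none());
    assert!(cache.get(&"a").is_some());
    assert!(cache.map.len() <= 2); // size never exceeds capacity
}
```

With a scheme like this, lowering the capacity constant directly caps worst-case memory, at the cost of more evictions (and therefore more misses) under load.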
use super::ValueCache;

/// Test cache size for unit tests.
const TEST_CACHE_SIZE: usize = 1000;
Why not 10? Does it really make a difference in any of the tests? Otherwise it will just make the tests slower.
True, 10 should be fine. Keep in mind that the slowness we would get from this comes from the initial memory allocation only; for a long-running test, a larger cache would actually make the test faster, as more things would be cached.
For these tests that trade-off doesn't seem to make sense though, so 10 indeed makes more sense.
We already have a size-based cache in the repository: BlockCache. Maybe as a follow-up though.
Motivation
After benchmarking for a while, some of our shards start OOMing. After inspecting memory profiles, the culprit seems to be the ValueCache for blocks specifically.
Proposal
Lower the size of the cache so that it fills up and starts evicting entries before the VM runs out of memory.
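As a back-of-envelope check of why capacity matters, the worst-case memory held by a size-bounded cache is roughly capacity × average entry size. The per-entry figure below is a hypothetical assumption for illustration, not a number taken from the memory profiles mentioned above:

```rust
fn main() {
    // Hypothetical average size of a cached block; the real figure would
    // come from the memory profiles mentioned in the PR description.
    let avg_entry_bytes: usize = 200 * 1024; // 200 KiB per entry (assumed)

    for capacity in [10_000usize, 5_000, 1_000] {
        let worst_case_bytes = capacity * avg_entry_bytes;
        println!(
            "capacity {:>6} -> worst case ~{} MiB",
            capacity,
            worst_case_bytes / (1024 * 1024)
        );
    }
}
```

Under that assumption, halving the capacity from 10,000 to 5,000 halves the worst-case footprint (from roughly 1.9 GiB to below 1 GiB per cache), which is consistent with the compromise discussed in the review thread.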
Test Plan
Deploy a network with this code, benchmark it for a while, and make sure that memory usage goes flat.
Release Plan
- Nothing to do / These changes follow the usual release cycle.